Search Results: "sven"

21 December 2020

Sven Hoexter: docker buildx sugar - dumping results to disk

The latest docker 20.10.x release unlocks the buildx subcommands, which allow for some sugar, like building something in a container and dumping the result to your local directory in one command.
Dockerfile
FROM docker-registry.mycorp.com/debian-node:lts as builder
USER service
COPY . /opt/service
RUN cd /opt/service; npm install; npm run build
FROM scratch as dist
COPY --from=builder /opt/service/dist /
build with
docker buildx build --target=dist --output type=local,dest=$(pwd)/pages/ .
Here we build a page, copy the result with all assets from the /opt/service/dist directory to an empty image and dump it into the local pages directory.

20 December 2020

Sven Hoexter: Mock a Serial pty Device with socat

Another note to myself before I forget about this nifty usage of socat again. I was looking for something to mock a serial device, similar to a microcontroller which usually ends up as /dev/ttyACM0 and might output some text. What I found is a very helpful post on stackoverflow showing an example utilizing socat.
$ socat -d -d pty,rawer pty,rawer
2020/12/20 21:37:53 socat[29130] N PTY is /dev/pts/8
2020/12/20 21:37:53 socat[29130] N PTY is /dev/pts/11
2020/12/20 21:37:53 socat[29130] N starting data transfer loop with FDs [5,5] and [7,7]
Write whatever you need to the second pty, here /dev/pts/11, e.g.
$ i=0; while :; do echo "foo: $i" > /dev/pts/11; let i++; sleep 5; done
Now you can listen with whatever you like, e.g. some tool you work on, on the first pty, here /dev/pts/8. For demonstration purposes just use cat:
$ cat /dev/pts/8
foo: 0
foo: 1
socat is an awesome tool; working through the manpage requires some knowledge about sockets, but it's incredibly versatile.

13 December 2020

Jan Wagner: Monitoring Plugins 2.3 released

While our last release had matured for quite some time, demands for a new release arose within our community. Development settled down this fall and @sni had already been using master in production for a while, so we thought about releasing. With the Debian freeze coming anyway: let's cut a new upstream release! The last question was: who should cut the release? The last releases were done by @JeremyHolger, but these days everybody is short on time. Fortunately Holger has documented the whole release process very well, so I jumped on the bandwagon and slipped into the release wizard role.
Surprisingly it seems I was able to fix all the mistakes I made in the release process (besides using the 2.2 link in the release announcement mail) and kicked off the release of monitoring-plugins 2.3 successfully. Grab the new Monitoring Plugins release while it's hot (and give the new check_curl a try)!
Hopefully monitoring-plugins 2.3 will hit Debian experimental soon.

14 October 2020

Sven Hoexter: Nice Helper to Sanitize File Names - sanity.pl

One of the most awesome helpers I carry around in my ~/bin since the early '00s is the sanity.pl script written by Andreas Gohr. It just recently came back into use when I started to archive some awesome Corona-enforced live session music with youtube-dl. Update: Francois Marier pointed out that Debian contains the detox package, which has similar functionality.

18 September 2020

Sven Hoexter: Avoiding the GitHub WebUI

Now that GitHub released v1.0 of the gh cli tool, and this is all over HN, it might make sense to write a note about the clumsy aliases and shell functions I cobbled together over the past months. The background story is that my dayjob moved from Bitbucket to GitHub. From my point of view the WebUI for Bitbucket is mediocre, but the one at GitHub is just awful and painful to use, especially for PR processing. So I longed for the terminal and ended up with gh and wtfutil as a dashboard. The setup we have is painful on its own, with several orgs and repos, some of which are more like monorepos covering several corners of infrastructure, and some which are very focused on a single component. All workflows are anti-GitHub workflows, so you must have permission on the repo, create a feature branch in that repo, and open a PR for the merge back into master.
gh functions and aliases
# setup a token with perms to everything, dealing with SAML is a PITA
export GITHUB_TOKEN="c0ffee4711"
# I use a light theme on my terminal, so adjust the gh theme
export GLAMOUR_STYLE="light"
# simple aliases to poke at a PR
alias gha="gh pr review --approve"
alias ghv="gh pr view"
alias ghd="gh pr diff"
### github support functions, most invoked with a PR ID as $1
# primary function to review PRs
function ghs {
    gh pr view $1
    gh pr checks $1
    gh pr diff $1
}
# very custom PR create function relying on ORG and TEAM settings hard coded
# main idea is to create the PR with my team directly assigned as reviewer
function ghc {
    if git status | grep -q 'Untracked'; then
        echo "ERROR: untracked files in branch"
        git status
        return 1
    fi
    git push --set-upstream origin HEAD
    gh pr create -f -r "$(git remote -v | grep push | grep -oE 'myorg-[a-z]+')/myteam"
}
# merge a PR and update master if we're not in a different branch
function ghm {
    gh pr merge -d -r $1
    if [[ "$(git rev-parse --abbrev-ref HEAD)" == "master" ]]; then
        git pull
    fi
}
# get an overview over the files changed in a PR
function ghf {
    gh pr diff $1 | diffstat -l
}
# generate a link to a commit in the WebUI to pass on to someone else
# input is a git commit hash
function ghlink {
    local repo="$(git remote -v | grep -E "github.+push" | cut -d':' -f 2 | cut -d'.' -f 1)"
    echo "https://github.com/${repo}/commit/${1}"
}
wtfutil
I have a terminal covering half my screen size with small dashboards listing PRs for the repos I care about. For other repos I reverted back to mail notifications, which get sorted and processed from time to time. A sample dashboard config looks like this:
github_admin:
  apiKey: "c0ffee4711"
  baseURL: ""
  customQueries:
    othersPRs:
      title: "Pull Requests"
      filter: "is:open is:pr -author:hoexter -label:dependencies"
  enabled: true
  enableStatus: true
  showOpenReviewRequests: false
  showStats: false
  position:
    top: 0
    left: 0
    height: 3
    width: 1
  refreshInterval: 30
  repositories:
    - "myorg/admin"
  uploadURL: ""
  username: "hoexter"
  type: github
The -label:dependencies is used here to filter out dependabot PRs in the dashboard.
Workflow
Look at a PR with ghv $ID, and if it's ok, ACK it with gha $ID. Create a PR from a feature branch with ghc and later on merge it with ghm $ID. The $ID is retrieved from looking at my wtfutil based dashboard.
Security Considerations
The world is full of bad jokes. For the WebUI access I've the full array of pain with SAML auth, which expires too often, and 2nd factor verification for my account backed by a Yubikey. But to work with the CLI you basically need an API token with full access; everything else drives you insane. So I gave in and generated exactly that. The end result is that I now have an API token - which is basically a password - with full power, stored in config files and environment variables. So the security features created around the login are all void now. Was that the aim of it after all?

8 September 2020

Sven Hoexter: Cloudflare Bot Management, MITM Boxes and TLS 1.3

This is just a "warn your brothers" post for those who use Cloudflare Bot Management and have customers which use MITM boxes to break up TLS 1.3 connections. Be aware that right now some heuristic rules in Cloudflare Bot Management score TLS 1.3 requests made by some MITM boxes with 1 - which equals "we're 99.99% sure that this is non-human browser traffic". While technically correct - the TLS connection hitting the Cloudflare edge node is not established by a browser - that does not help your customer if you block those requests. If you do something like blocking requests with a BM score of 1 at the Cloudflare edge, you might want to reconsider that at the moment and send a captcha challenge instead. While that is not a lot nicer, and still pisses people off, you might find a balance there between protecting yourself and still having some customers. I've a confirmation of this happening with Cisco WSA, but it's likely also the case with other vendors. Breaking up TLS 1.2 seems to be stealthy enough in those appliances that it's not detected, so this issue creeps in as more enterprises roll out modern browsers. You can now insert your own rant here about how bad the client-server internet of 2020 is, how bad it is that some of us rely on Cloudflare, and that they have accumulated a way too big market share. But the world is as it is. :(

24 August 2020

Sven Hoexter: google cloud buster images without python 2

Update
I have to stand corrected. noahm@ wrote me, because the Debian Cloud image maintainers only ever included python explicitly in the Azure images. The most likely explanation for the change in the Google images is that Google just ported the last parts of their own software to python 3, and subsequently removed python 2. With some relief one can conclude it's only our own fault that we did not build our own images, which include all our own dependencies. Take it as a reminder to always build your own images. Always. Be it VMs or docker. Build your own image.
Original Post
Fun in the morning: we realized that the Debian Cloud image builds dropped python 2 and that propagated to the Google provided Debian/buster images. So in case you use something like ansible, and so far assumed python 2 as the default interpreter, and installed additional python 2 modules to support ansible modules, you now have to either install python 2 again or just move to python 3. We just try to suffer through it now, and set interpreter_python = auto in our ansible.cfg to anticipate the new default behaviour, which is planned for ansible 2.12. See also https://docs.ansible.com/ansible/latest/reference_appendices/interpreter_discovery.html
The other lesson to learn here: the GCE Debian stable images are not stable. Blends in nicely with this rant, though it's not 100% a Google Cloud foul this time.
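For reference, the setting mentioned above lives in the [defaults] section of ansible.cfg; a minimal fragment:

```
[defaults]
# opt in to the interpreter discovery behaviour planned as default for ansible 2.12
interpreter_python = auto
```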

14 August 2020

Sven Hoexter: Retrieve WLAN PSK via nmcli

Note to myself so I do not have to figure this out again every few months when I've to dig out a WLAN PSK from my existing configuration.
Step 1: Figure out the UUID of the network:
$ nmcli con show
NAME                  UUID                                  TYPE      DEVICE          
br-59d010130b86       d8672d3d-7cf6-484f-9ef8-e6ec3e73bef7  bridge    br-59d010130b86 
FRITZ!Box 7411        1ed1cec1-f586-4e75-ba6d-c9f6f4cba6e2  wifi      wlp4s0
[...]
Step 2: Request to view the PSK for this network based on the UUID
$ nmcli --show-secrets --fields 802-11-wireless-security.psk con show '1ed1cec1-f586-4e75-ba6d-c9f6f4cba6e2'
802-11-wireless-security.psk:           0815471123420511111

13 August 2020

Sven Hoexter: An Average IT Org

Supply chain attacks are a known issue, and lately there was also a discussion around the relevance of reproducible builds. Looking in comparison at an average IT org doing something with the internet, I believe the pressing problem is neither supply chain attacks nor a lack of reproducible builds. The real problem is the amount of prefabricated binaries, supplied by someone else and created in an unknown build environment with unknown tools, that the average IT org requires to do anything.
The Mess the World Runs on
By chance I had an opportunity to look at what some other people I know use, and here is the list I could compile by scratching just at the surface: Of course there are also all the script language repos - Python, Ruby, Node/Typescript - around as well. Looking at myself, who's working in a different IT org but with a similar focus, I have the following lingering around on my for-work laptop, retrieved as a binary from a 3rd party: Yes, some of that is even non-free and might contain spyw^telemetry.
Takeaway I
By guessing based on the Pareto Principle, probably 80% of the software mentioned above is also open source software. But, and here we leave Pareto behind, close to none of it is built by the average IT org from source. Why should the average IT org care about advanced issues like supply chain attacks on source code and mitigations, when it already gets into very hot water the day DockerHub closes down, HashiCorp moves from open core to full proprietary or Elastic decides to no longer offer free binary builds? The reality out there seems to be that the infrastructure of "modern" IT orgs is managed similar to the Windows 95 installation of my childhood. You just grab running binaries from somewhere and run them. The main difference seems to be that you no longer have the inconvenience of downloading a .xls from geocities you've to rename to .rar, and that it's legal.
Takeaway II
In the end the binary supply is like a drug for the user, and somehow the Debian project is also just another dealer / middle man in this setup. There are probably a lot of open questions to think about in that context. Are we the better dealer because we care about signed sources we retrieve from upstream and because we engage in reproducible build projects? Are our own means of distributing binaries any better than a binary download from github via https with a manual checksum verification, or the Debian repo at download.docker.com? Is the approach of the BSD/Gentoo ports, where you have to compile at least some software from source, the better one? Do I really want to know how some of the software is actually built? Or some more candid ones like: is gnutls a good choice for the https support in apt, and how solid is the gnupg code base? Update: Regarding apt there seems to be some movement.

17 July 2020

Sven Hoexter: Debian and exFAT

As some might have noticed, we now have Linux 5.7 in Debian/unstable and subsequently the in-kernel exFAT implementation created by Samsung available. Thus we now have two exFAT implementations: the exfat fuse driver and the Linux kernel based one. Since some comments and mails I received showed minor confusion, especially around the available tooling, it might help to clarify a bit what is required when.
Using the Samsung Linux Kernel Implementation
Probably the most common use case is that you just want to use the in-kernel implementation. Easy: install a Linux 5.7 kernel package for your architecture and either remove the exfat-fuse package or make sure you've version 1.3.0-2 or later installed. Then you can just run mount /dev/sdX /mnt and everything should be fine. Your result will look something like this:
$ mount | grep sdb
/dev/sdb on /mnt type exfat (rw,relatime,fmask=0022,dmask=0022,iocharset=utf8,errors=remount-ro)
In the past this basic mount invocation utilized the mount.exfat helper, which was just a symlink to the helper shipped as /sbin/mount.exfat-fuse. The link was dropped from the package in 1.3.0-2. If you're running a not so standard setup, and would like to keep an older version of exfat-fuse installed, you must invoke mount -i to prevent mount from loading any helper to mount the filesystem. See also man 8 mount. For those who care, mstone@ and myself had a brief discussion about this issue in #963752, which quickly brought me to the conclusion that it's in the best interest of a majority of users to just drop the symlink from the package.
Sticking to the Fuse Driver
If you would like to stick to the fuse driver you can of course just do it. I plan to continue to maintain all packages for the moment. Just keep the exfat-fuse package installed and use the mount.exfat-fuse helper directly. E.g.
$ sudo mount.exfat-fuse /dev/sdb /mnt
FUSE exfat 1.3.0
$ mount | grep sdb
/dev/sdb on /mnt type fuseblk (rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other,blksize=4096)
In case this is something you would like to make permanent, I would recommend that you create yourself a mount.exfat symlink pointing at the mount.exfat-fuse helper.
mkfs and fsck - exfat-utils vs exfatprogs
Besides the filesystem access itself, we now also have two implementations of tooling which support filesystem creation, fscking and label adjustment. The older one is exfat-utils by the creator of the fuse driver, which has been part of Debian since the fuse driver was packaged in 2012. New in Debian is the exfatprogs package written by the Samsung engineers. And here the situation starts to get a bit more complicated. Both packages ship a mkfs.exfat and fsck.exfat tool, so we can not co-install them. In the end both packages declare a conflict with each other at the moment. As outlined in this thread, I do not plan to overcomplicate the situation by using the alternatives system. I feel strongly that this would just create more confusion without a real benefit. Since the tools do not have matching cli options, it could also cause additional issues and confusion. I plan to keep that as is, at least for the bullseye release. Afterwards it's possible, depending on how the usage evolves, to drop the mkfs.exfat and fsck.exfat from exfat-utils; they are in fact again only symlinks. A pain point might be tools interfacing with the differing implementations. Currently I see only three reverse dependencies, so that should be manageable to consolidate if required. Last but not least it might be relevant to mention that the exfat-utils package also contains a dumpexfat tool which could be helpful if you're more into forensics, or looking into other lower level analysis of an exFAT filesystem. Thus there is a bit of an interest to have those tools co-installed in some - I would say - niche cases.
buster-backports
Well, if you use buster with a backports kernel you're a bit on your own. In case you want to keep the fuse driver installed, but would still like to mount, e.g. for testing, with the kernel exFAT driver, you must use mount -i. I do not plan any uploads to buster-backports. If you need a mkfs.exfat on buster, I would recommend to just use the one from exfat-utils for now. It has been good enough for the past years and should not go sour before the bullseye release, which ships exfatprogs for you.
Kudos
My sincere kudos go to:

28 May 2020

Bits from Debian: New Debian Developers and Maintainers (March and April 2020)

The following contributors got their Debian Developer accounts in the last two months: The following contributors were added as Debian Maintainers in the last two months: Congratulations!

16 May 2020

Sven Hoexter: Quick and Dirty: masquerading / NAT with nftables

Since nftables is now the new default, a short note to myself on how to setup masquerading, like the usual NAT setup you use on a gateway.
nft add table nat
nft add chain nat postrouting { type nat hook postrouting priority 100 \; }
nft add rule nat postrouting ip saddr 192.168.1.0/24 oif wlan0 masquerade
In this case the wlan0 is basically the "WAN" interface, because I use an old netbook as a wired to wireless network adapter.
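To make this survive a reboot, the three nft commands above can also be written as a declarative /etc/nftables.conf fragment; a sketch assuming the same interface and subnet:

```
table ip nat {
	chain postrouting {
		type nat hook postrouting priority 100;
		ip saddr 192.168.1.0/24 oif "wlan0" masquerade
	}
}
```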

19 April 2020

Sven Hoexter: Emulating Raspi2 like hardware with Rasbian in 2020

To follow some older (as in two years old) ARM assembler howto, I searched for a quick and dirty way to run a current Raspbian with qemu 4.2 on Debian/unstable. The end result is the following set of notes to get that up and running:
# Download a binary device tree file and matching kernel a good soul uploaded to github
wget https://github.com/vfdev-5/qemu-rpi2-vexpress/raw/master/kernel-qemu-4.4.1-vexpress
wget https://github.com/vfdev-5/qemu-rpi2-vexpress/raw/master/vexpress-v2p-ca15-tc1.dtb
# Download the official Raspbian image without X
wget -O raspbian_lite_latest.zip https://downloads.raspberrypi.org/raspbian_lite_latest
unzip raspbian_lite_latest.zip
# Convert it from the raw image to a qcow2 image and add some space
qemu-img convert -f raw -O qcow2 2020-02-13-raspbian-buster-lite.img rasbian.qcow2
qemu-img resize rasbian.qcow2 +2G
# start qemu
qemu-system-arm -m 2048M -M vexpress-a15 -cpu cortex-a15 \
 -kernel kernel-qemu-4.4.1-vexpress -no-reboot \
 -smp 2 -serial stdio \
 -dtb vexpress-v2p-ca15-tc1.dtb -sd rasbian.qcow2 \
 -append "root=/dev/mmcblk0p2 rw rootfstype=ext4 console=ttyAMA0,115200 loglevel=8" \
 -nic user,hostfwd=tcp::5555-:22
# login at the serial console as user pi with password raspberry
sudo -i
# enable ssh
systemctl enable ssh
# resize partition and filesystem
parted /dev/mmcblk0 resizepart 2 100%
resize2fs /dev/mmcblk0p2
Now I can login via ssh and start to play:
ssh pi@localhost -p 5555
So for me that is sufficient: I have network connectivity to install an editor, transfer files, and can otherwise work with tmux to have some session multiplexing.
Additional Notes

2 April 2020

Sven Hoexter: New TLDs and Automatic link detection was a bad idea

Update: Seems this is a Firefox specific bug in the Slack web application; it works in Chrome and the Slack Electron application as it should. Tested with Firefox ESR on Debian/buster and Firefox 74 on OS X.
Ah, I like it that we now have so many TLDs, and matching on those seems to go bad more often now. The last occasion is Slack (which I think is a pile of shit written by morons, but that is a different story) which somehow does not properly match on .co domains. Leading to this auto linking: nsswitch.conf link. Now I'm not sure if someone encountered the same issue, or people just registered random domains just because they could. I found registrations for a few of those already. I've a few more .conf files in /etc which could be interesting in an IT environment, but for the sake of playing with it I registered nsswitch.co at godaddy. I do not want to endorse them in any way, but for the first year it's only 13.08EUR right now, which is okay to pay for a stupid demo. So if you feel like it, you can probably register something stupid for yourself to play around with. I do not intend to renew this domain next year, so be aware of what happens then with the next owner.

29 March 2020

Sven Hoexter: Looking into Envertech Enverbridge EVB 202 SetID tool

Disclaimer: I'm neither an experienced programmer nor proficient in reverse engineering, but I like to at least try to figure out how things work. Sometimes the solution is so easy that even I manage to find it; still, take this with a grain of salt. I lately witnessed the setup of an Envertech EnverBridge ENB-202, which is kind of a classic Chinese IoT device. Buy it, plug it in, use some strange setup software, and it will report your PV statistics to a web portal. The setup involved downloading a PE32 Windows executable, with a UI that basically has two input boxes and a send button. You've to input the serial number(s) of your inverter boxes and the ID of your EnverBridge. That made me interested in what this setup process really looks like. The EnverBridge device itself has on one end a power plug, which is also used to communicate with the inverter via some powerline protocol, and a network plug with a classic RJ45 end you plug into your network. If you power it up it will request an IPv4 address via DHCP. That brings us to the first oddity: the MAC address is in the BC:20:90 prefix, which I could not find in the IEEE lists.
Setting Up the SetID Software
You can download the Windows software as a zipfile; once you unpack it you end up with a Nullsoft installer .exe. Since this is a PE32 executable we've to add i386 as a foreign architecture to install the wine32 package.
dpkg --add-architecture i386
apt update
apt install wine32:i386
wget http://www.envertec.com/uploads/bigfiles/Set%20ID.zip
unzip Set\ ID.zip
wine Set\ ID.exe
The end result is an installation in ~/.wine/drive_c/Program Files/SetID, which reveals that this software is built with Qt5 according to the shipped DLLs. The tool itself is in udpCilentNow.exe, and looks like this: Envertech SetID software
The Network Communication
To my own surprise, the communication is straightforward. A single UDP packet is sent to the broadcast address (255.255.255.255) on port 8765. Envertech SetID udp packet in wireshark. I expected some strange binary protocol, but the payload is just simple numbers. They're a combination of the serial numbers of the inverter and the ID of the EnverBridge device. One thing I'm not 100% sure about are the inverter serial numbers; there are two of them, but on the inverters I've seen, the serial numbers are always the same. So the payload is assembled like this: If you've more inverters, the serials are just appended in the same way. Another strange thing is that the software does close to no input validation; they only check that the inverter serials start with CN and then just extract characters 9 to 16. The response from the EnverBridge is also a single UDP packet to the broadcast address, on port 8764, with exactly the same content we've sent.
Writing a Replacement
My result is probably an insult to all proficient Python coders, but based on a bit of research and some cut&paste programming I could assemble a small script to replicate the function of the Windows binary. I guess the usefulness of this exercise was mostly my personal entertainment, though it might help some non-Windows users to set up this device. Usage is also very simple:
./enverbridge.py -h
Usage: enverbridge.py [options] MIIDs
Options:
  -h, --help         show this help message and exit
  -b BID, --bid=BID  Serial Number of your EnverBridge
./enverbridge.py -b 90087654 CN19100912345678 CN19100912345679
This is basically 1:1 the behaviour of the Windows binary, though I tried to add a bit more validation than the original binary and some more error messages. I also assume the serial numbers are always the same, so I take only one as input and duplicate it for the packet data.

12 November 2017

Sven Hoexter: Offering a Simtec Entropy Key

Since I started to lean a bit towards the concept of minimalism, I've got rid of stuff, including all stationary computers. So for now I'm left with just my laptop, and that's something where I do not want to permanently attach a USB entropy key. That's why I've a spare Simtec Entropy Key I no longer use, and am willing to sell. In case someone is interested, I'm willing to give it away for 20EUR + shipping. If you can convince me it'll be of use for the Debian project (end up on a DSA managed machine for example) I'm willing to give it away for less. If you're located in Cologne, Copenhagen or Barcelona we might be able, depending on the timing, to do a personal handover (with or without keysigning). Otherwise I guess shipping is mainly interesting for someone also located in Europe. You can use sven at stormbind dot net or hoexter at debian dot org to contact me, and use GPG key 0xA6DC24D9DA2493D1.

29 September 2017

Sven Hoexter: Last rites to the lyx and elyxer packaging

After having been a heavy LyX user from 2005 to 2010, I've continued to maintain LyX more or less till now. Finally I'm starting to leave that stage and removed myself from the Uploaders list. The upload with some other last packaging changes is currently sitting in the git repo, mainly because lintian on ftp-master currently rejects 'packagename@packages.d.o' maintainer addresses (the alternative to the lists.alioth.d.o maintainer mailing lists). For elyxer I filed a request for removal. It hasn't seen any upstream activity for a while and the LyX built-in HTML export support improved. My hope is that if I step away far enough, someone else might actually pick it up. I had this strange moment when I lately realized that xchat got reintroduced to Debian after mapreri and myself spent some time last year to get it removed before the stretch release.

8 September 2017

Sven Hoexter: munin with TLS

Primarily a note for my future self so I don't have to find out what I did in the past once more. If you're running some smaller systems scattered around the internet, without connecting them with a VPN, you might want your munin master and nodes to communicate with TLS and validate certificates. If you remember what to do, it's a rather simple and straightforward process. To manage the PKI I'll utilize the well known easyrsa script collection. For this special purpose CA I'll go with a flat layout, so it's one root certificate issuing all server and client certificates directly. Some very basic docs can also be found in the munin wiki.
master setup
For your '/etc/munin/munin.conf':
tls paranoid
tls_verify_certificate yes
tls_private_key /etc/munin/master.key
tls_certificate /etc/munin/master.crt
tls_ca_certificate /etc/munin/ca.crt
tls_verify_depth 1
A node entry with TLS will look like this:
[node1.stormbind.net]
    address [2001:db8::]
    use_node_name yes
Important points here:
For easy-rsa the following command invocations are relevant:
./easyrsa init-pki
./easyrsa build-ca
./easyrsa gen-req master
./easyrsa sign-req client master
./easyrsa set-rsa-pass master nopass
node setup
For your '/etc/munin/munin-node.conf':
tls paranoid
tls_verify_certificate yes
tls_private_key /etc/munin/node1.key
tls_certificate /etc/munin/node1.crt
tls_ca_certificate /etc/munin/ca.crt
tls_verify_depth 1
For easy-rsa the following command invocations are relevant:
./easyrsa gen-req node1
./easyrsa sign-req server node1
./easyrsa set-rsa-pass node1 nopass
Important points here:

31 August 2017

Paul Wise: FLOSS Activities August 2017

Changes

Issues

Review

Administration
  • myrepos: get commit/admin access from joeyh at DebConf17, add commit/admin access for other patch submitters, apply my stack of patches
  • Debian: fix weird log file issues, redirect hardware donor, cleaned up a weird dir, fix some OOB info, ask for TLS on meetings-archive.d.n, check an I/O error, restart broken stunnels, powercycle 1 borked machine
  • Debian mentors: lintian/security updates & reboot
  • Debian wiki: remove some stray cache files, whitelist 3 email domains, whitelist some email addresses, disable 1 spammer account, disable 1 account with bouncing email
  • Debian QA: apply patch to fix PTS watch file errors, deploy changes
  • Debian derivatives census: run scripts for Purism, remove some noise from logs, trigger a recheck, merge fix by Unit193, deploy changes
  • Openmoko: security updates, reboots, enable unattended-upgrades

Communication
  • Attended DebConf17 and provided some input in BoFs
  • Sent Misc Dev News #44
  • Invite Google gLinux (on IRC) to the Debian derivatives census
  • Welcome Sven Haardiek (of GreenboneOS) to the Debian derivatives census
  • Inquire about the status of Canaima

Sponsors The samba bug report was sponsored by my employer. All other work was done on a volunteer basis.

22 August 2017

Sven Hoexter: whois databases and registration spam

Lately I experienced some new kind of spam, at least new to myself. It seems that spammers abuse registration input fields that do not implement strong enough validation, and which echo back several values from the registration process in some kind of welcome mail - basically filling the spam message into the name and surname fields. So far I found a bunch of those originating from the following AS: AS49453, AS50896, AS200557 and AS61440. The first three belong to something identifying itself as "QUALITYNETWORK". The last one, AS61440, seems to be involved only partially, with some networks being delegated to "Atomohost". To block them it's helpful to query the public radb service whois.radb.net for all networks belonging to a specific AS like this:
whois -h whois.radb.net -- '-i origin AS50896'
Another bunch of batch-capable whois services are provided by Team Cymru. They've some examples at the end of https://www.team-cymru.org/IP-ASN-mapping.html. In this specific case the spam was for "www.robotas.ru", which is currently terminated at CloudFlare and redirects via JS document.location to "http://link31.net/b494d/ooo/", which in turn redirects via JS window.location to "http://revizor-online.ga/", which is again hosted at CloudFlare. The page at the end plays some strange youtube video - currently at around 1900 plays, so not that widely spread. In the end an interesting indicator of spam campaign success.
